📚 node [[dqn|dqn]]
⥅ related node [[deep_q network_(dqn)]]
⥅ related node [[dqn]]
⥅ node [[dqn]] pulled by Agora
📓
garden/KGBicheno/Artificial Intelligence/Introduction to AI/Week 3 - Introduction/Definitions/Dqn.md by @KGBicheno
DQN
Go back to the [[AI Glossary]]
#rl
Abbreviation for Deep Q-Network, a reinforcement learning method that uses a deep neural network to approximate the Q-function (the expected return of taking an action in a given state).
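The core of DQN training is the bootstrapped target y = r + γ · max_a' Q(s', a') computed from a transition (s, a, r, s'). A minimal sketch of that target computation, assuming NumPy and hypothetical array shapes (the function name and batch layout here are illustrative, not from any particular library):

```python
import numpy as np

# Sketch of the DQN training target for a batch of transitions.
# For each transition (s, a, r, s', done):
#   y = r + gamma * max_a' Q(s', a')   (just y = r when the episode ended)
def dqn_targets(rewards, next_q_values, dones, gamma=0.99):
    """rewards: (batch,); next_q_values: (batch, n_actions); dones: (batch,) bool."""
    max_next_q = next_q_values.max(axis=1)                 # max over actions a'
    return rewards + gamma * max_next_q * (1.0 - dones.astype(float))

rewards = np.array([1.0, 0.0])
next_q = np.array([[0.5, 2.0],   # Q(s', a') estimates from the target network
                   [1.0, 3.0]])
dones = np.array([False, True])
targets = dqn_targets(rewards, next_q, dones)
# first transition: 1.0 + 0.99 * 2.0 = 2.98; second is terminal, so 0.0
```

In the full algorithm these targets come from a separate, periodically updated "target network", and the online network is regressed toward them over minibatches sampled from a replay buffer.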
dropout regularization
A form of regularization useful in training neural networks. Dropout regularization works by removing a random selection of a fixed number of the units in a network layer for a single gradient step. The more units dropped out, the stronger the regularization. This is analogous to training the network to emulate an exponentially large ensemble of smaller networks. For full details, see Dropout: A Simple Way to Prevent Neural Networks from Overfitting.
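The mechanism above can be sketched in a few lines of NumPy. This shows the common "inverted dropout" variant, in which each unit is zeroed independently with a fixed probability and survivors are rescaled so the expected activation is unchanged (the function name and shapes are illustrative assumptions):

```python
import numpy as np

# Minimal sketch of inverted dropout for one gradient step (training only).
# Each unit is dropped independently with probability `rate`; kept units are
# scaled by 1/(1-rate) so no rescaling is needed at inference time.
def dropout(activations, rate, rng):
    keep = rng.random(activations.shape) >= rate   # random mask of kept units
    return activations * keep / (1.0 - rate)       # inverted-dropout scaling

rng = np.random.default_rng(0)
x = np.ones((4, 8))            # a layer's activations for a batch of 4
y = dropout(x, rate=0.5, rng=rng)
# each entry of y is either 0.0 (dropped) or 2.0 (kept and rescaled)
```

A fresh random mask is drawn for every gradient step, which is what makes the trained network behave like an average over an exponentially large ensemble of thinned sub-networks.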
📖 stoas
- public document at doc.anagora.org/dqn|dqn
- video call at meet.jit.si/dqn|dqn
🔎 full text search for 'dqn|dqn'